205 research outputs found

    Knowledge-based graphical interfaces for presenting technical information

    Designing effective presentations of technical information is extremely difficult and time-consuming. Moreover, the combination of increasing task complexity and declining job skills makes the need for high-quality technical presentations especially urgent. We believe that this need can ultimately be met through the development of knowledge-based graphical interfaces that can design and present technical information. Since much material is most naturally communicated through pictures, our work has stressed the importance of well-designed graphics, concentrating on generating pictures and laying out displays containing them. We describe APEX, a testbed picture generation system that creates sequences of pictures that depict the performance of simple actions in a world of 3D objects. Our system supports rules for automatically determining the objects to be shown in a picture, the style and level of detail with which they should be rendered, the method by which the action itself should be indicated, and the picture's camera specification. We then describe work on GRIDS, an experimental display layout system that addresses some of the problems in designing displays containing these pictures by determining the position and size of the material to be presented.
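    The rule-based design process the APEX abstract describes can be sketched as follows. This is a minimal illustration, not APEX's actual API: the `Action` and `PictureSpec` types and all four rules are assumptions invented to show the flavor of rules that map an action to a picture specification.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Action:
        verb: str       # the action being depicted, e.g. "turn"
        target: str     # the object the action is performed on
        context: list   # other objects near the target in the 3D world

    @dataclass
    class PictureSpec:
        objects: dict = field(default_factory=dict)  # object -> level of detail
        camera: str = "default"
        action_cue: str = "none"

    def design_picture(action: Action) -> PictureSpec:
        """Apply simple rules to decide what a picture of `action` shows."""
        spec = PictureSpec()
        # Rule 1: the action's target is always shown, in full detail.
        spec.objects[action.target] = "full"
        # Rule 2: nearby context objects are shown schematically, for orientation.
        for obj in action.context:
            spec.objects[obj] = "schematic"
        # Rule 3: indicate motion verbs with an arrow cue.
        if action.verb in {"turn", "push", "pull", "lift"}:
            spec.action_cue = "arrow"
        # Rule 4: frame the camera close up when there is little context to show.
        spec.camera = "close-up" if len(action.context) < 2 else "wide"
        return spec

    spec = design_picture(Action("turn", "dial", ["radio panel"]))
    ```

    Each rule settles one of the design choices the abstract enumerates: which objects appear, how they are rendered, how the action is indicated, and the camera specification.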

    Interactive Multimedia Explanation for Equipment Maintenance and Repair

    COMET (COordinated Multimedia Explanation Testbed) is a research system that we are developing to explore the coordinated generation of multimedia explanations of equipment maintenance and repair procedures. The form and content of all material presented is generated interactively, with an emphasis on coordinating multiple media to allow cross-references between media and to make possible display layout that reflects the fine-grain relationships among the material presented. COMET's architecture includes multiple static and dynamic knowledge sources, a content planner, a media coordinator, media generators (currently text and graphics), and a media layout manager. Examples are given of the kinds of material processed and produced by each of the components.
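    The component chain named in the abstract (content planner, media coordinator, media generators, layout manager) can be sketched as a pipeline. The component names follow the abstract, but every interface, data shape, and assignment rule below is an invented simplification, not COMET's actual design.

    ```python
    def content_planner(goal):
        """Produce a media-neutral content description for an explanation goal."""
        return [{"info": f"step {i} of {goal}", "kind": k}
                for i, k in enumerate(["location", "action", "caution"], 1)]

    def media_coordinator(content):
        """Assign each piece of content to text, graphics, or both."""
        for item in content:
            # Toy rule: locations are easiest to show, cautions easiest to say.
            item["media"] = {"location": "graphics",
                             "action": "both",
                             "caution": "text"}[item["kind"]]
        return content

    def generate(content):
        """Run the media-specific generators over their assigned content."""
        text = [c["info"] for c in content if c["media"] in ("text", "both")]
        graphics = [c["info"] for c in content if c["media"] in ("graphics", "both")]
        return {"text": text, "graphics": graphics}

    def layout(rendered):
        """Lay the media out together (here: a crude side-by-side pairing)."""
        return list(zip(rendered["text"], rendered["graphics"]))

    explanation = layout(generate(media_coordinator(content_planner("replace fuse"))))
    ```

    The key architectural point the sketch preserves is that a single content description feeds all media-specific generators, with the coordinator deciding the division of labor between them.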

    Coordinating Text and Graphics in Explanation Generation

    To generate multimedia explanations, a system must be able to coordinate the use of different media in a single explanation. In this paper, we present an architecture that we have developed for COMET (COordinated Multimedia Explanation Testbed), a system that generates directions for equipment maintenance and repair, and we show how it addresses the coordination problem. In particular, we focus on the use of a single content planner that produces a common content description used by multiple media-specific generators, a media coordinator that makes a fine-grained division of information between media, and bidirectional interaction between media-specific generators to allow influence across media.
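    The bidirectional interaction this abstract highlights can be illustrated with a toy exchange: the graphics generator reports how it depicted an object, and the text generator phrases its cross-reference accordingly. The function names and depiction format are assumptions for illustration, not COMET's interfaces.

    ```python
    def graphics_generator(obj, salient):
        """Render `obj`, highlighting it when it is the salient item."""
        style = "highlighted" if salient else "plain"
        return {"object": obj, "style": style}

    def text_generator(obj, depiction):
        """Generate a sentence whose wording reflects the graphics choice."""
        if depiction["style"] == "highlighted":
            return f"Turn the {obj} (highlighted in the picture)."
        return f"Turn the {obj}."

    picture = graphics_generator("dial", salient=True)
    sentence = text_generator("dial", picture)  # text adapts to graphics
    ```

    Because influence flows across media rather than through a fixed one-way pipeline, the text can cross-reference exactly what the picture shows.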

    Interaction and presentation techniques for shake menus in tangible augmented reality

    Menus play an important role in both information presentation and system control. We explore the design space of shake menus, which are intended for use in tangible augmented reality. Shake menus are radial menus displayed centered on a physical object and activated by shaking that object. One important aspect of their design space is the coordinate system used to present menu options. We conducted a within-subjects user study to compare the speed and efficacy of several alternative methods for presenting shake menus in augmented reality (world-referenced, display-referenced, and object-referenced), along with a baseline technique (a linear menu on a clipboard). Our findings suggest trade-offs amongst speed, efficacy, and flexibility of interaction, and point towards the possible advantages of hybrid approaches that compose together transformations in different coordinate systems. We close by describing qualitative feedback from use and present several illustrative applications of the technique.

    View management for virtual and augmented reality


    Cross-Dimensional Gestural Interaction Techniques for Hybrid Immersive Environments

    We present a set of interaction techniques for a hybrid user interface that integrates existing 2D and 3D visualization and interaction devices. Our approach is built around one- and two-handed gestures that support the seamless transition of data between co-located 2D and 3D contexts. Our testbed environment combines a 2D multi-user, multi-touch, projection surface with 3D head-tracked, see-through, head-worn displays and 3D tracked gloves to form a multi-display augmented reality. We also address some of the ways in which we can interact with private data in a collaborative, heterogeneous workspace.
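    The cross-dimensional transitions described above can be sketched as moving an item between a shared 2D surface and a user's private 3D context. The `HybridWorkspace` class, the `pull`/`push` gesture names, and the context model are all hypothetical, invented only to illustrate the idea of data crossing between co-located 2D and 3D contexts.

    ```python
    class HybridWorkspace:
        def __init__(self):
            self.surface_2d = {}  # shared multi-touch surface: name -> item
            self.space_3d = {}    # per-user private 3D context: user -> items

        def pull(self, user, name):
            """A "pull" gesture lifts an item off the 2D surface into the
            user's 3D context, where it is no longer publicly visible."""
            item = self.surface_2d.pop(name)
            self.space_3d.setdefault(user, {})[name] = item
            return item

        def push(self, user, name):
            """A "push" gesture drops a 3D item back onto the shared surface."""
            item = self.space_3d[user].pop(name)
            self.surface_2d[name] = item
            return item

    ws = HybridWorkspace()
    ws.surface_2d["model"] = {"kind": "3D model"}
    ws.pull("alice", "model")  # now private to alice's head-worn 3D view
    ```

    The sketch also hints at the privacy point in the abstract: data on the projection surface is visible to all users, while data pulled into a head-worn display is visible only to its owner.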